
    IcSDE+ -- An Indicator for Constrained Multi-Objective Optimization

    The effectiveness of Constrained Multi-Objective Evolutionary Algorithms (CMOEAs) depends on their ability to reach the different feasible regions during evolution by exploiting the information present in infeasible solutions, in addition to optimizing the several conflicting objectives. Over the years, researchers have proposed several CMOEAs to handle Constrained Multi-objective Optimization Problems (CMOPs). However, most of the proposed CMOEAs are either decomposition-based or Pareto-based, with little focus on indicator-based CMOEAs. In the literature, most indicator-based CMOEAs either a) employ traditional indicators designed for unconstrained multi-objective problems to compute indicator values from the objective values and combine them with the overall constraint violation, solving the CMOP as a single-objective constrained problem, or b) treat each constraint, or the overall constraint violation, as additional objective(s) alongside the actual objectives. In this paper, we propose an effective single-population indicator-based CMOEA, referred to as IcSDE+, that can explore the different feasible regions in the search space. IcSDE+ is an (I)ndicator that is an efficient fusion of constraint violation (c), shift-based density estimation (SDE), and sum of objectives (+). The performance of the CMOEA with IcSDE+ compares favorably against 9 state-of-the-art CMOEAs on 6 different benchmark suites with diverse characteristics.
    Comment: 13 pages, 2 main figures
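    The abstract names the three ingredients of the indicator but not the exact fusion rule; the sketch below is a hypothetical illustration of how a constraint-violation term, shift-based density estimation, and a sum-of-objectives term could be combined per individual. The function names, the normalisation, and the final aggregation are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def overall_constraint_violation(G):
    """Sum of positive constraint violations per individual.
    G: (n, k) matrix of constraint values, with g_j(x) <= 0 meaning feasible."""
    return np.maximum(G, 0.0).sum(axis=1)

def shifted_density(F):
    """Shift-based density estimation (SDE): when assessing individual i, every
    objective value of another individual that is better (smaller) than i's is
    shifted up to i's value before measuring distance; the density estimate here
    is the distance to the nearest shifted neighbour (a simplification)."""
    n = F.shape[0]
    d = np.empty(n)
    for i in range(n):
        shifted = np.maximum(F, F[i])                  # shift better objective values
        dist = np.linalg.norm(shifted - F[i], axis=1)
        dist[i] = np.inf                               # ignore self-distance
        d[i] = dist.min()
    return d

def icsde_plus(F, G):
    """Hypothetical fusion: combine constraint violation, sum of normalised
    objectives and SDE-based crowding into one score (lower is better)."""
    cv = overall_constraint_violation(G)
    Fn = (F - F.min(axis=0)) / (np.ptp(F, axis=0) + 1e-12)   # normalise objectives
    sum_obj = Fn.sum(axis=1)                                  # convergence term
    density = shifted_density(Fn)                             # larger = less crowded
    return cv + sum_obj - density                             # one plausible aggregation
```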

    IDEAL: Improved DEnse locAL Contrastive Learning for Semi-Supervised Medical Image Segmentation

    Due to the scarcity of labeled data, Contrastive Self-Supervised Learning (SSL) frameworks have lately shown great potential in several medical image analysis tasks. However, the existing contrastive mechanisms are sub-optimal for dense pixel-level segmentation tasks due to their inability to mine local features. To this end, we extend the concept of metric learning to the segmentation task, using dense (dis)similarity learning to pre-train a deep encoder network and a semi-supervised paradigm to fine-tune it for the downstream task. Specifically, we propose a simple convolutional projection head for obtaining dense pixel-level features, and a new contrastive loss that utilizes these dense projections, thereby improving the local representations. A bidirectional consistency regularization mechanism involving two-stream model training is devised for the downstream task. Upon comparison, our IDEAL method outperforms the SoTA methods by fair margins on cardiac MRI segmentation.
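    The paper's exact projection head and loss are not reproduced here; the following PyTorch-style sketch (the torch package is assumed to be available, and all layer sizes and the temperature are illustrative assumptions) shows the general idea of a 1x1 convolutional projection head producing dense pixel-level embeddings, with an InfoNCE-style loss that treats matched spatial locations of two augmented views as positives.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseProjectionHead(nn.Module):
    """1x1 convolutional projection head: keeps the spatial grid so that every
    pixel location gets its own embedding vector."""
    def __init__(self, in_channels=256, proj_channels=128):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Conv2d(in_channels, in_channels, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, proj_channels, kernel_size=1),
        )

    def forward(self, feats):                  # feats: (B, C, H, W) encoder features
        z = self.proj(feats)
        return F.normalize(z, dim=1)           # unit-norm per-pixel embeddings

def dense_info_nce(z1, z2, temperature=0.1):
    """Dense contrastive loss: embeddings at the same spatial location in the two
    views are positives; all other locations in the batch act as negatives."""
    B, C, H, W = z1.shape
    q = z1.permute(0, 2, 3, 1).reshape(-1, C)          # (B*H*W, C)
    k = z2.permute(0, 2, 3, 1).reshape(-1, C)
    logits = q @ k.t() / temperature                   # pairwise similarities
    targets = torch.arange(q.size(0), device=q.device) # positive = same location
    return F.cross_entropy(logits, targets)
```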

    Evolutionary Programming based Recommendation System for Online Shopping

    In this paper, we propose an interactive evolutionary programming based recommendation system for online shopping that estimates human preference from eye movement analysis. Given a set of images of different clothes, the eye movement patterns of human subjects while looking at clothes they like differ from those for clothes they do not like. Therefore, in the proposed system, human preference is measured from the way the subjects look at the images of different clothes; in other words, it can be quantified using the fixation count and the fixation length recorded by an eye tracking system. Based on the level of human preference, the evolutionary programming suggests new clothes that are close to the estimated preference through operations such as selection and mutation. The proposed recommendation system is tested with several human subjects, and the experimental results are demonstrated.
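    As a rough illustration of the pipeline described above, a preference score could be derived from eye-tracking data and used as fitness in an evolutionary programming loop. The weighting of fixation count versus fixation length, and the encoding of a clothing item as a numeric feature vector, are assumptions made only to keep the sketch self-contained; they are not details taken from the paper.

```python
import numpy as np

def preference_score(fixation_count, fixation_length_s, w_count=1.0, w_length=0.5):
    """Hypothetical preference estimate: more and longer fixations on an item are
    taken as evidence that the subject likes it."""
    return w_count * fixation_count + w_length * fixation_length_s

def mutate(item_features, sigma=0.1, rng=None):
    """EP-style mutation: perturb the numeric clothing feature vector
    (e.g. colour, pattern, style parameters) with Gaussian noise."""
    rng = rng or np.random.default_rng()
    return item_features + rng.normal(0.0, sigma, size=item_features.shape)

def next_suggestions(items, scores, n_keep=3, rng=None):
    """Select the items the subject looked at most favourably and propose
    mutated variants of them as the next set of recommendations."""
    rng = rng or np.random.default_rng()
    keep = np.argsort(scores)[-n_keep:]          # highest preference scores
    return np.array([mutate(items[i], rng=rng) for i in keep])
```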

    Reinforcement Learning-assisted Evolutionary Algorithm: A Survey and Research Opportunities

    Evolutionary algorithms (EAs), a class of stochastic search methods based on the principles of natural evolution, have received widespread acclaim for their exceptional performance in various real-world optimization problems. While researchers worldwide have proposed a wide variety of EAs, certain limitations remain, such as slow convergence speed and poor generalization capabilities. Consequently, numerous scholars actively explore improvements to algorithmic structures, operators, search patterns, etc., to enhance their optimization performance. Reinforcement learning (RL) integrated as a component in the EA framework has demonstrated superior performance in recent years. This paper presents a comprehensive survey on integrating reinforcement learning into the evolutionary algorithm, referred to as the reinforcement learning-assisted evolutionary algorithm (RL-EA). We begin with the conceptual outlines of reinforcement learning and the evolutionary algorithm. We then provide a taxonomy of RL-EA. Subsequently, we discuss the RL-EA integration method, the RL-assisted strategy adopted by RL-EA, and its applications according to the existing literature. The RL-assisted procedure is divided according to the functions it implements, including solution generation, learnable objective function, algorithm/operator/sub-population selection, parameter adaptation, and other strategies. Finally, we analyze potential directions for future research. This survey serves as a rich resource for researchers interested in RL-EA, as it overviews the current state-of-the-art and highlights the associated challenges. By leveraging this survey, readers can swiftly gain insights into RL-EA to develop efficient algorithms, thereby fostering further advancements in this emerging field.
    Comment: 26 pages, 16 figures
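    As one concrete example of the kind of integration the survey catalogues (operator selection), the sketch below uses a simple epsilon-greedy value estimate to pick a mutation operator each generation. The operator names, the reward definition, and the parameters are illustrative assumptions, not a specific method from the survey.

```python
import numpy as np

rng = np.random.default_rng(0)
operators = ["gaussian_mutation", "cauchy_mutation", "uniform_crossover"]
q = np.zeros(len(operators))       # running value estimate per operator
counts = np.zeros(len(operators))  # how often each operator has been used

def select_operator(epsilon=0.1):
    """Epsilon-greedy choice: mostly exploit the operator with the best estimate,
    occasionally explore a random one."""
    if rng.random() < epsilon:
        return int(rng.integers(len(operators)))
    return int(np.argmax(q))

def update(op_index, reward):
    """Incremental mean update of the operator's value; the reward could be the
    fitness improvement the operator produced in the current generation."""
    counts[op_index] += 1
    q[op_index] += (reward - q[op_index]) / counts[op_index]
```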

    Ensemble strategies with evolutionary programming and differential evolution for solving single objective optimization problems

    Evolutionary Algorithms (EAs) are population-based algorithms that can tackle complex optimization problems with minimal information about the characteristics of the problem. The performance of Evolutionary Programming (EP), a veteran of the evolutionary computation community, depends mostly on the mutation operation, where an offspring is produced from the parent by adding a scaled random perturbation drawn from a chosen distribution. In EP, the scale factor is referred to as the strategy parameter and is self-adapted using a lognormal adaptation rule. The abrupt reduction in the strategy parameter values caused by the lognormal self-adaptation may result in premature convergence of the search process. To overcome the drawbacks of lognormal self-adaptation, we propose an adaptive EP (AEP). AEP differs from EP in the initialization and adaptation of the strategy parameter values: the parameters are initialized scaled to the search range and are adapted based on the search performance in the previous few generations.
    Doctor of Philosophy
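    For context, classic EP self-adapts each strategy parameter eta with a lognormal rule, eta' = eta * exp(tau' * N(0,1) + tau * N_j(0,1)). The sketch below contrasts that rule with a performance-based adaptation of the kind the abstract describes; the success-rate rule, its constants, and the initialization scale are illustrative assumptions, not the thesis' exact AEP rule.

```python
import numpy as np

rng = np.random.default_rng()

def lognormal_self_adaptation(eta, n):
    """Classic EP rule: each component's strategy parameter is multiplied by a
    lognormal factor; repeated shrinking can drive eta towards zero too early."""
    tau_prime = 1.0 / np.sqrt(2.0 * n)
    tau = 1.0 / np.sqrt(2.0 * np.sqrt(n))
    return eta * np.exp(tau_prime * rng.normal() + tau * rng.normal(size=n))

def performance_based_adaptation(eta, success_rate, up=1.2, down=0.85):
    """Assumed AEP-style rule: enlarge the step size when recent mutations often
    improved fitness, shrink it gently otherwise."""
    return eta * (up if success_rate > 0.2 else down)

def initialise_eta(lower, upper, scale=0.1):
    """Initialise strategy parameters scaled to the search range, as described."""
    return scale * (np.asarray(upper) - np.asarray(lower))
```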

    Investigation of the effects of sampling, delay and discretization on the control of attitude and orientation of small satellites

    81 p.
    Onboard the satellite, data needs to be collected from various sensing devices for autonomous operations such as health monitoring, payload operations, and orbit and attitude control. Of these, the Attitude Determination and Control System (ADCS) is a key subsystem in small satellites such as X-Sat, which find application in Earth observation.
    Master of Science (Computer Control and Automation)

    UEQMS: UMAP Embedded Quick Mean Shift Algorithm for High Dimensional Clustering

    The mean shift algorithm is a simple yet very effective clustering method widely used for image and video segmentation as well as other exploratory data analysis applications. Recently, a new algorithm called MeanShift++ (MS++) for low-dimensional clustering was proposed with a speedup of 4000 times over the vanilla mean shift. In this work, starting with a first-of-its-kind theoretical analysis of MS++, we extend its reach to high-dimensional data clustering by integrating Uniform Manifold Approximation and Projection (UMAP) based dimensionality reduction into the same framework. Analytically, we show that MS++ can indeed converge to a non-critical point. Subsequently, we suggest modifications to MS++ to improve its convergence characteristics. In addition, we propose a way to further speed up MS++ by avoiding the execution of the MS++ iterations for every data point. By incorporating UMAP with the modified MS++, we design a faster algorithm, named UMAP embedded quick mean shift (UEQMS), for partitioning data with a relatively large number of recorded features. Through extensive experiments, we showcase the efficacy of UEQMS over other state-of-the-art algorithms in terms of accuracy and runtime.
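    The authors' MS++/UEQMS code is not reproduced here. As a minimal sketch of the general pipeline the abstract describes (reduce dimensionality with UMAP, then cluster with mean shift), the following assumes the umap-learn and scikit-learn packages and uses scikit-learn's standard MeanShift in place of the accelerated MS++ variant.

```python
import numpy as np
import umap                              # package: umap-learn
from sklearn.cluster import MeanShift
from sklearn.datasets import load_digits

# High-dimensional data: 64-dimensional digit images as a stand-in dataset.
X = load_digits().data

# Step 1: embed into a low-dimensional space with UMAP.
embedding = umap.UMAP(n_components=2, random_state=42).fit_transform(X)

# Step 2: run mean shift on the embedding (stand-in for the faster MS++).
labels = MeanShift().fit_predict(embedding)

print("number of clusters found:", len(np.unique(labels)))
```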

    Adaptive evolutionary programming with p-best mutation strategy

    Although initially conceived for evolving finite state machines, Evolutionary Programming (EP), in its present form, is largely used as a powerful real-parameter optimizer. For function optimization, EP mainly relies on its mutation operators. Over the past few years, several mutation operators have been proposed to improve the performance of EP on a wide variety of numerical benchmarks. However, unlike real-coded GAs, there has been no fitness-induced bias in parent selection for mutation in EP; that is, the i-th population member is selected deterministically for mutation to create the i-th offspring in each generation. In this article we propose a p-best mutation scheme for EP where any one of the p (p ∈ {1, 2, …, μ}, where μ denotes the population size) top-ranked population members (according to fitness values) is selected randomly for mutation. The scheme is invoked with 50% probability for each index in the current population, i.e. the i-th offspring can now be obtained either by mutating the i-th parent or by mutating a randomly selected individual from the p top-ranked vectors. The percentage of best members is made dynamic by decreasing p from μ/2 to 1 over the generations, to favor explorative behavior at the early stages of the search and exploitation during the later stages. We investigate the effectiveness of introducing controlled bias in parent selection in conjunction with an Adaptive Fast EP (AFEP), where the value of a strategy parameter is updated based on the previous record of successful mutations with the same parameter. Comparison with recent and best-known versions of EP over 25 benchmark functions from the CEC (Congress on Evolutionary Computation) 2005 test-suite for real-parameter optimization, and on two engineering optimization problems, reflects the statistically validated superiority of the new scheme in terms of final accuracy, speed, and robustness. Comparison with AFEP without p-best mutation demonstrates the improvement in performance due to the proposed mutation scheme alone.
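    The following sketch illustrates the p-best parent-selection rule as described above. The Cauchy perturbation used as the fast-EP-style mutation and the exact linear schedule for p are assumptions made only to keep the example self-contained.

```python
import numpy as np

rng = np.random.default_rng()

def p_schedule(gen, max_gen, mu):
    """Linearly decrease p from mu/2 down to 1 over the run."""
    return max(1, int(round(mu / 2 - (mu / 2 - 1) * gen / max_gen)))

def p_best_mutation(pop, fitness, eta, gen, max_gen):
    """pop: (mu, n) population; fitness: (mu,) values, lower = better;
    eta: (mu,) per-individual step sizes.  Each offspring i is created, with 50%
    probability, from a random parent among the p best individuals instead of
    from parent i itself."""
    mu, n = pop.shape
    p = p_schedule(gen, max_gen, mu)
    order = np.argsort(fitness)                        # indices of best members
    offspring = np.empty_like(pop)
    for i in range(mu):
        if rng.random() < 0.5:
            parent = pop[order[rng.integers(p)]]       # one of the p top-ranked
        else:
            parent = pop[i]                            # classic deterministic choice
        offspring[i] = parent + eta[i] * rng.standard_cauchy(n)  # fast-EP style step
    return offspring
```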

    ISDE+ — an indicator for multi and many-objective optimization

    In this letter, an efficient indicator for multi- and many-objective optimization is proposed. The proposed indicator (ISDE+) is a combination of the sum of objectives and shift-based density estimation, and benefits from their ability to promote convergence and diversity, respectively. An evolutionary multi-objective optimization framework based on the proposed indicator is shown to perform comparably to, or better than, the state-of-the-art on a variety of scalable benchmark problems.
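    A minimal sketch of how such an indicator would typically drive environmental selection in an indicator-based framework is given below. The placeholder indicator_scores function stands in for ISDE+ (it keeps only the sum-of-objectives term for brevity; a full version would also include the shift-based density term, as in the IcSDE+ sketch earlier in this list), and the truncation scheme is an illustration, not the paper's exact procedure.

```python
import numpy as np

def indicator_scores(F):
    """Stand-in for the ISDE+ indicator: lower scores mean better combined
    convergence and diversity.  Simplified here to the sum of normalised
    objectives; the shift-based density term is omitted for brevity."""
    Fn = (F - F.min(axis=0)) / (np.ptp(F, axis=0) + 1e-12)
    return Fn.sum(axis=1)

def environmental_selection(pop, F, mu):
    """Keep the mu individuals with the best (lowest) indicator values."""
    keep = np.argsort(indicator_scores(F))[:mu]
    return pop[keep], F[keep]
```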